This work proposes a new approach for predicting vehicle trajectories in highway scenarios using an efficient bird's-eye-view representation and convolutional neural networks. With this basic visual representation, vehicle positions, motion history, road configuration, and vehicle interactions are easily included in the prediction model. A U-Net model is chosen as the prediction kernel to generate future visual representations of the scene using an image-to-image regression approach. A method has been implemented to extract vehicle positions from the generated graphical representation with sub-pixel resolution. The approach has been trained and evaluated on the PREVENTION dataset, an on-board sensor dataset. Different network configurations and scene representations have been evaluated. The study finds that a U-Net with 6 depth levels, a linear terminal layer, and a Gaussian representation of the vehicles is the best-performing configuration. Using lane markings was found not to improve prediction performance. The mean prediction error is 0.47 and 0.38 metres, and the final prediction error is 0.76 and 0.53 metres for the longitudinal and lateral coordinates, respectively, for a predicted trajectory length of 2.0 seconds. The prediction error is up to 50% lower compared with the baseline method.
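As a concrete illustration of the sub-pixel position extraction step mentioned above, the sketch below refines the integer argmax of a Gaussian vehicle blob by parabolic interpolation of the log-intensities along each axis. This is one plausible way to do it, assumed here for illustration; the paper's actual extraction procedure may differ.

```python
import numpy as np

def subpixel_peak(heatmap: np.ndarray):
    """Refine the integer argmax of a Gaussian blob to sub-pixel accuracy using
    1-D parabolic interpolation of the log-intensities along each axis
    (exact for an ideal Gaussian whose peak is not on the image border)."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    logv = np.log(heatmap + 1e-12)

    def refine(lm, l0, lp):
        # Vertex of the parabola fitted through three consecutive log-values.
        return 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

    dr = refine(logv[r - 1, c], logv[r, c], logv[r + 1, c]) if 0 < r < heatmap.shape[0] - 1 else 0.0
    dc = refine(logv[r, c - 1], logv[r, c], logv[r, c + 1]) if 0 < c < heatmap.shape[1] - 1 else 0.0
    return r + dr, c + dc

# Usage with a synthetic Gaussian blob centred between pixels.
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-(((y - 20.3) ** 2) + ((x - 41.7) ** 2)) / (2 * 2.0 ** 2))
print(subpixel_peak(blob))   # approximately (20.3, 41.7)
```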
Content moderation is the process of screening and monitoring user-generated content online. It plays a crucial role in stopping content resulting from unacceptable behaviors such as hate speech, harassment, violence against specific groups, terrorism, racism, xenophobia, homophobia, or misogyny, to name just a few, on online social platforms. These platforms make use of a plethora of tools to detect and manage malicious information; however, malicious actors also improve their skills, developing strategies to surpass these barriers and continue to spread misleading information. Twisting and camouflaging keywords are among the most frequently used techniques to evade platform content moderation systems. In response to this ongoing issue, this paper presents an innovative approach to address this linguistic trend in social networks through the simulation of different content evasion techniques and a multilingual Transformer model for content evasion detection. In this way, we share with the rest of the scientific community a public multilingual tool named "pyleetspeak", which generates and simulates the phenomenon of content evasion through automatic word camouflage in a customizable way, together with a multilingual Named-Entity Recognition (NER) Transformer-based model tuned for its recognition and detection. The multilingual NER model is evaluated in different textual scenarios, detecting different types and mixtures of camouflage techniques, and achieves an overall weighted F1 score of 0.8795. This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content evasion on social networks, making the fight against information disorders more effective.
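As a toy illustration of the kind of word camouflage the paper simulates (leetspeak-style character substitution), the snippet below randomly rewrites characters of a keyword. It is a deliberately simplified sketch and does not reflect pyleetspeak's actual API or substitution rules.

```python
import random

# Toy character-substitution table in the spirit of leetspeak camouflage.
LEET_MAP = {"a": ["4", "@"], "e": ["3"], "i": ["1", "!"], "o": ["0"], "s": ["5", "$"]}

def camouflage(word: str, prob: float = 0.7, seed=None) -> str:
    """Randomly replace characters of a keyword to simulate content-evasion spelling."""
    rng = random.Random(seed)
    out = []
    for ch in word:
        subs = LEET_MAP.get(ch.lower())
        out.append(rng.choice(subs) if subs and rng.random() < prob else ch)
    return "".join(out)

print(camouflage("misinformation"))   # e.g. "m!s1nf0rm4t10n" (varies with the seed)
```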
Spacecraft pose estimation is a key task to enable space missions in which two spacecraft must navigate around each other. Current state-of-the-art algorithms for pose estimation employ data-driven techniques. However, there is an absence of real training data for spacecraft imaged in space conditions due to the costs and difficulties associated with the space environment. This has motivated the introduction of 3D data simulators, solving the issue of data availability but introducing a large gap between the training (source) and test (target) domains. We explore a method that incorporates 3D structure into the spacecraft pose estimation pipeline to provide robustness to intensity domain shift, and we present an algorithm for unsupervised domain adaptation with robust pseudo-labelling. Our solution ranked second in both categories of the 2021 Pose Estimation Challenge organised by the European Space Agency and Stanford University, achieving the lowest average error over the two categories.
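The following is a minimal sketch of a confidence-filtered pseudo-labelling step of the kind used in unsupervised domain adaptation: a model trained on simulated (source) images labels unlabelled target-domain images, and only confident predictions are kept for retraining. The dummy network, its output format, and the threshold are assumptions of this sketch, not the paper's actual pipeline.

```python
import torch
import torch.nn as nn

class DummyPoseNet(nn.Module):
    """Stand-in for a pose-estimation network (real models predict rotation/translation)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())
        self.pose_head = nn.Linear(128, 7)        # quaternion (4) + translation (3)
        self.conf_head = nn.Linear(128, 1)        # hypothetical per-sample confidence

    def forward(self, x):
        feats = self.backbone(x)
        return self.pose_head(feats), torch.sigmoid(self.conf_head(feats)).squeeze(-1)

@torch.no_grad()
def select_pseudo_labels(model, target_images, conf_threshold=0.8):
    """Keep only confident target-domain predictions as pseudo-labels."""
    model.eval()
    poses, conf = model(target_images)
    keep = conf > conf_threshold
    return target_images[keep], poses[keep]

# Usage: filter a batch of unlabelled target-domain images.
images = torch.rand(16, 3, 64, 64)
pseudo_imgs, pseudo_poses = select_pseudo_labels(DummyPoseNet(), images)
```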
In this paper, we consider the problem where a drone has to collect semantic information to classify multiple moving targets. In particular, we address the challenge of computing control inputs that move the drone to informative viewpoints (position and orientation) when the information is extracted using a "black-box" classifier, e.g., a deep learning neural network. Such classifiers typically lack analytical relationships between the viewpoints and their associated outputs, preventing their use in information-gathering schemes. To fill this gap, we propose a novel attention-based architecture, trained via Reinforcement Learning (RL), that outputs the next viewpoint for the drone, favoring the acquisition of evidence from as many unclassified targets as possible while reasoning about their movement, orientation, and occlusions. Then, we use a low-level MPC controller to move the drone to the desired viewpoint, taking into account its actual dynamics. We show that our approach not only outperforms a variety of baselines but also generalizes to scenarios unseen during training. Additionally, we show that the network scales to large numbers of targets and generalizes well to different movement dynamics of the targets.
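A minimal sketch of an attention-based viewpoint policy in the spirit described above: per-target feature vectors are attended to by a learned query, and a policy head outputs the next viewpoint. The feature dimensions, the single-query design, and the 4-dimensional viewpoint output are assumptions made for this sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ViewpointPolicy(nn.Module):
    """Illustrative attention-based policy: attends over per-target features
    (e.g., position, heading, accumulated evidence) and outputs the next
    viewpoint (x, y, z, yaw) for the drone."""
    def __init__(self, target_dim=8, embed_dim=64, n_heads=4):
        super().__init__()
        self.encoder = nn.Linear(target_dim, embed_dim)
        self.attention = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, embed_dim) * 0.02)   # learned drone query
        self.head = nn.Linear(embed_dim, 4)                              # next viewpoint

    def forward(self, targets):                  # targets: (batch, n_targets, target_dim)
        tokens = self.encoder(targets)
        query = self.query.expand(targets.size(0), -1, -1)
        context, _ = self.attention(query, tokens, tokens)
        return self.head(context.squeeze(1))

policy = ViewpointPolicy()
print(policy(torch.rand(2, 10, 8)).shape)        # torch.Size([2, 4])
```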
Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, it is the most frequent cause of blindness in developed countries. Although some promising treatments have been developed, their effectiveness is low in advanced stages. This emphasizes the importance of large-scale screening programs. Nevertheless, implementing such programs for AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. All this motivates the development of automatic methods. In this sense, several works have achieved positive results for AMD diagnosis using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms, which limits their use in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. In our proposal, a CNN is trained end-to-end for the joint task using image-level labels. The provided lesion information is of clinical interest, as it allows the developmental stage of AMD to be assessed. Additionally, the approach allows the diagnosis to be explained in terms of the identified lesions. This is possible thanks to the use of a CNN with a custom setting that links the lesions and the diagnosis. Furthermore, the proposed setting also allows coarse lesion segmentation maps to be obtained in a weakly-supervised way, further improving the explainability. The training data for the approach can be obtained without much extra work by clinicians. The conducted experiments demonstrate that our approach can identify AMD and its associated lesions satisfactorily, while providing adequate coarse segmentation maps for the most common lesions.
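One plausible way to realise a setting that links lesion identification and diagnosis is sketched below under our own assumptions (ResNet-18 backbone, one 1x1-convolution score map per lesion, max-based diagnosis rule; none of these are confirmed by the abstract): per-lesion score maps give weakly-supervised coarse segmentations, their pooled values give image-level lesion predictions, and the diagnosis is derived directly from those lesion scores.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class LesionBasedAMDNet(nn.Module):
    """Sketch: a CNN produces one score map per lesion, global pooling yields
    image-level lesion predictions, and the AMD diagnosis is derived from the
    lesion scores, so the diagnosis is explainable by construction."""
    def __init__(self, n_lesions=4):                                      # n_lesions is a placeholder
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])    # conv feature maps
        self.lesion_maps = nn.Conv2d(512, n_lesions, kernel_size=1)       # coarse lesion maps

    def forward(self, x):
        maps = self.lesion_maps(self.features(x))            # (B, n_lesions, H', W')
        lesion_logits = maps.amax(dim=(2, 3))                 # image-level lesion scores
        amd_logit = lesion_logits.max(dim=1).values           # diagnosis driven by lesions
        return amd_logit, lesion_logits, maps

model = LesionBasedAMDNet()
amd, lesions, seg_maps = model(torch.rand(1, 3, 224, 224))
```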
The study of the retinal vasculature is a fundamental stage in the screening and diagnosis of many diseases. A complete retinal vascular analysis requires the segmentation of the retinal vessels into arteries and veins (A/V). Early automatic methods approached these segmentation and classification tasks in two sequential stages. Currently, however, these tasks are handled as a joint semantic segmentation task, since the classification results strongly depend on the effectiveness of the vessel segmentation. In this regard, we propose a new approach for the simultaneous segmentation and classification of the retinal A/V from eye fundus images. In particular, we propose a novel method that, unlike previous approaches and thanks to a new loss, decomposes the joint task into three segmentation problems targeting arteries, veins, and the whole vascular tree. This configuration allows vessel crossings to be handled intuitively and directly provides accurate segmentation masks of the different target vascular trees. The ablation study provided on the public Retinal Images vessel Tree Extraction (RITE) dataset shows that the proposed method offers satisfactory performance, particularly in the segmentation of the different structures. Moreover, the comparison with the state of the art shows that our method obtains highly competitive results in A/V classification while significantly improving the vessel segmentation. The proposed multi-segmentation approach allows more vessels to be detected and the different structures to be better segmented, while achieving competitive classification performance. Also, in these terms, our approach outperforms those of various reference works. Furthermore, in contrast to previous approaches, the proposed method allows the direct detection of vessel crossings and preserves the continuity of A/V at these complex locations.
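A minimal sketch of how the joint task can be decomposed into three binary segmentation problems with a combined loss, as described above. The use of per-structure binary cross-entropy and a three-channel output is an assumption of this sketch, not necessarily the paper's exact loss.

```python
import torch
import torch.nn as nn

def multi_tree_loss(logits, artery_mask, vein_mask, vessel_mask):
    """Sketch of a loss that decomposes A/V classification into three binary
    segmentation problems: arteries, veins, and the whole vascular tree.
    `logits` has three channels, one per target structure (an assumption here)."""
    bce = nn.BCEWithLogitsLoss()
    return (bce(logits[:, 0], artery_mask)
            + bce(logits[:, 1], vein_mask)
            + bce(logits[:, 2], vessel_mask))

# Usage with dummy tensors (batch of 2, 64x64 masks).
logits = torch.randn(2, 3, 64, 64)
a, v, bv = (torch.randint(0, 2, (2, 64, 64)).float() for _ in range(3))
print(multi_tree_loss(logits, a, v, bv))
```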
Rainforests play an important role in the global ecosystem. However, significant areas of them are facing deforestation and degradation for several reasons. Various governmental and private initiatives have been created to monitor and raise alerts about increases in deforestation from remote sensing images, using different approaches to handle the significant amount of generated data. Citizen science projects can also be employed to achieve the same goal. Citizen science consists of scientific research involving non-professional volunteers in analysing and collecting data and using their computational resources, leading to advances in science and to an increase in the public's understanding of problems in specific knowledge domains, such as astronomy, chemistry, mathematics, and physics. In this sense, this work presents a citizen science project, named ForestEyes, which monitors deforested areas in rainforests using volunteers' answers obtained through the analysis and classification of remote sensing images. To assess the quality of those answers, different campaigns/workflows were launched using remote sensing images from the Brazilian Legal Amazon, and their results were compared with the official ground-truth maps produced by the Amazon Deforestation Monitoring Project. In this work, the first two workflows, which covered the state of Rondônia in 2013 and 2016, received more than 35,000 answers from 383 volunteers across the 2,050 tasks created, within only two and a half weeks of their release. The other four workflows, which encompassed the same area (Rondônia) with different setups (e.g., image segmentation method, image resolution, and detection target), received 51,035 answers from 281 volunteers across the 3,358 tasks created. In the performed experiments...
Frouros is a Python library capable of detecting drift in machine learning problems. It provides a combination of classical and more recent algorithms for drift detection: both supervised and unsupervised, as well as some capable of operating in a semi-supervised fashion. We designed it with the aim of making it easy to integrate with the scikit-learn library and of implementing the same application programming interface. The library has been developed following a set of best development and continuous integration practices to ensure ease of maintenance and extensibility. The source code is available at https://github.com/ifca/frouros.
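For illustration only, the snippet below shows the kind of supervised drift detection such a library supports: monitoring a stream of model errors or metrics and flagging drift when the windowed mean departs from a reference. It is a generic sketch and does not reproduce Frouros's actual classes or API.

```python
import numpy as np

class SimpleMeanDriftDetector:
    """Illustrative detector (not Frouros's actual API): flags drift when the mean of a
    monitored error/metric window deviates from the reference mean by more than
    `threshold` standard errors, in the spirit of classical supervised detectors."""
    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.ref_mean = None
        self.ref_std = None

    def fit(self, reference: np.ndarray) -> "SimpleMeanDriftDetector":
        self.ref_mean = float(np.mean(reference))
        self.ref_std = float(np.std(reference) + 1e-12)
        return self

    def update(self, window: np.ndarray) -> bool:
        sem = self.ref_std / np.sqrt(len(window))     # standard error of the window mean
        z = abs(np.mean(window) - self.ref_mean) / sem
        return z > self.threshold

detector = SimpleMeanDriftDetector().fit(np.random.normal(0, 1, 1000))
print(detector.update(np.random.normal(2, 1, 100)))   # almost certainly True: the stream has shifted
```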
Federated learning is a data-decentralisation, privacy-preserving technique used to perform machine or deep learning in a secure way. In this paper, we present theoretical aspects of federated learning together with use cases in which the number of clients varies. Specifically, a use case of medical image analysis is presented, using chest X-ray images obtained from an open data repository. Besides the privacy-related advantages, improvements in the predictions (in terms of accuracy and area under the curve) and reductions in execution time (with respect to a centralised approach) are studied. Different clients are simulated from the training data and selected in an unbalanced manner, i.e., they do not all have the same amount of data. The results of considering between three and ten clients are compared with the centralised case. Two approaches are analysed for intermittent clients, since, as in a real scenario, some clients may leave the training and some new ones may join it. The evolution of the results as the number of clients into which the original data is divided increases is shown, in terms of accuracy, area under the curve, and execution time. Finally, improvements and future work in the field are proposed.
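As a sketch of how model updates from clients with unequal amounts of data can be combined, the snippet below implements FedAvg-style weighted averaging of client weights. FedAvg is a common choice; whether it matches the aggregation used in the paper is an assumption.

```python
import copy
import torch

def federated_average(client_states, client_sizes):
    """Average client model weights, weighting each client by its number of
    training samples, so that unbalanced data splits are handled sensibly."""
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# Usage: aggregate the state_dicts of three clients with unbalanced dataset sizes.
# new_state = federated_average([m1.state_dict(), m2.state_dict(), m3.state_dict()], [500, 1200, 300])
```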
In this work, we evaluate an ensemble of population models and machine learning models to forecast the near-future evolution of the COVID-19 pandemic, with a particular use case in Spain. We rely solely on open and public datasets, fusing incidence, vaccination, human mobility, and weather data to feed our machine learning models (Random Forest, Gradient Boosting, k-Nearest Neighbours, and Kernel Ridge Regression). We use the incidence data to adjust classical population models (Gompertz, Logistic, Richards, Bertalanffy) so that they better capture the trend of the data. We then ensemble these two families of models to obtain more robust and accurate predictions. Furthermore, we have observed that the predictions obtained with the machine learning models improve as we add new features (vaccines, mobility, climatic conditions), and we analyse the importance of each feature using SHapley Additive exPlanations values. As in any other modelling work, there are multiple limitations in both the data and the predictions, so they must be viewed from a critical standpoint, as we discuss in the text. Our work concludes that the ensemble use of these models improves the individual predictions (using only machine learning models or only population models) and can be applied, with caution, in cases where compartmental models cannot be used due to a lack of relevant data.
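As a small worked example of fitting one of the classical population models named above (the Gompertz curve) to an incidence series and extrapolating the trend, consider the sketch below. The synthetic data, initial guesses, and forecasting horizon are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, b, c):
    """Gompertz growth curve: cumulative cases approach the plateau K over time."""
    return K * np.exp(-b * np.exp(-c * t))

# Hypothetical cumulative-incidence series (days vs. cases) used only to show the fit.
t = np.arange(0, 60)
cases = 1e5 * np.exp(-5 * np.exp(-0.1 * t)) + np.random.normal(0, 500, t.size)

params, _ = curve_fit(gompertz, t, cases, p0=[cases.max(), 5.0, 0.1], maxfev=10000)
forecast = gompertz(np.arange(60, 74), *params)   # two-week-ahead trend extrapolation
```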